Envelope Word and Gap Sequence in Doubling Sequence
Every factor of the Doubling sequence occurs in it infinitely many times.
Each factor therefore has a sequence of consecutive occurrences, and the gap
between each occurrence and the next forms the gap sequence of that factor.
In this paper, we discuss the structure of the gap sequence. We prove that
all factors can be divided into two types: one type has exactly two distinct
gaps, the other has exactly three. We determine the expressions of the gaps
completely, and also give the substitution generating each gap sequence. The
main tool in this paper is the "envelope word", which is a new notion. As an
application, we determine the positions of all occurrences of each factor,
discuss some combinatorial properties of factors, and count the distinct
squares beginning at each position of the sequence.
Comment: 14 pages, 7 figures. arXiv admin note: text overlap with
arXiv:1408.372
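The gap computation above can be sketched numerically, assuming the Doubling sequence is the fixed point of the substitution a → ab, b → aa; the factors "ab" and "aa" below are illustrative choices, and start-position differences between consecutive occurrences stand in for the paper's gaps:

```python
# A minimal sketch, assuming the Doubling sequence is the fixed point of the
# substitution a -> ab, b -> aa; factors "ab" and "aa" are illustrative.

def doubling_sequence(iterations):
    """Iterate the substitution a -> ab, b -> aa starting from 'a'."""
    sub = {"a": "ab", "b": "aa"}
    s = "a"
    for _ in range(iterations):
        s = "".join(sub[c] for c in s)
    return s

def occurrence_gaps(s, w):
    """Differences between start positions of consecutive (possibly
    overlapping) occurrences of the factor w in s."""
    starts = [i for i in range(len(s) - len(w) + 1) if s[i:i + len(w)] == w]
    return [q - p for p, q in zip(starts, starts[1:])]

seq = doubling_sequence(10)            # 2**10 = 1024 letters
gaps_ab = occurrence_gaps(seq, "ab")   # two distinct values: {2, 4}
gaps_aa = occurrence_gaps(seq, "aa")   # three distinct values
```

On this prefix, "ab" exhibits exactly two distinct gap values and "aa" exactly three, matching the two-type dichotomy claimed above.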
Mitigate Replication and Copying in Diffusion Models with Generalized Caption and Dual Fusion Enhancement
While diffusion models demonstrate a remarkable capability for generating
high-quality images, their tendency to `replicate' training data raises privacy
concerns. Although recent research suggests that this replication may stem from
the insufficient generalization of training data captions and duplication of
training images, effective mitigation strategies remain elusive. To address
this gap, our paper first introduces a generality score that measures
caption generality and employs a large language model (LLM) to generalize training
captions. Subsequently, we leverage generalized captions and propose a novel
dual fusion enhancement approach to mitigate the replication of diffusion
models. Our empirical results demonstrate that our proposed methods can
significantly reduce replication by 43.5% compared to the original diffusion
model while maintaining the diversity and quality of generations.
Making Models Shallow Again: Jointly Learning to Reduce Non-Linearity and Depth for Latency-Efficient Private Inference
The large number of ReLU and MAC operations in deep neural networks makes them
ill-suited for latency- and compute-efficient private inference. In this paper,
we present a model optimization method that allows a model to learn to be
shallow. In particular, we leverage the ReLU sensitivity of a convolutional
block to remove a ReLU layer and merge its succeeding and preceding convolution
layers to a shallow block. Unlike existing ReLU reduction methods, our joint
reduction method can yield models with improved reduction of both ReLUs and
linear operations by up to 1.73x and 1.47x, respectively, evaluated with
ResNet18 on CIFAR-100, without any significant accuracy drop.
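The merge step can be illustrated with a toy 1-D, single-channel example (a sketch, not the paper's implementation): once the ReLU between two convolution layers is removed, their composition is linear, so the two kernels collapse into a single kernel obtained by convolving them.

```python
import numpy as np

# Sketch: without a ReLU between them, two convolutions compose into one
# whose kernel is the convolution of the two kernels (1-D, single channel).
rng = np.random.default_rng(0)
x = rng.standard_normal(32)    # input signal
k1 = rng.standard_normal(3)    # preceding conv kernel
k2 = rng.standard_normal(3)    # succeeding conv kernel

# Two-layer ("deep") path, with the intermediate ReLU removed.
deep = np.convolve(np.convolve(x, k1, mode="valid"), k2, mode="valid")

# Merged ("shallow") path: one convolution with the combined kernel.
merged_kernel = np.convolve(k1, k2)    # full convolution, length 3 + 3 - 1 = 5
shallow = np.convolve(x, merged_kernel, mode="valid")

assert np.allclose(deep, shallow)
```

The same algebra carries over to 2-D multi-channel convolutions, which is what makes the shallow block mathematically equivalent once the nonlinearity is gone.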
Robot Learning on the Job: Human-in-the-Loop Autonomy and Learning During Deployment
With the rapid growth of computing power and recent advances in deep
learning, we have witnessed impressive demonstrations of novel robot
capabilities in research settings. Nonetheless, these learning systems exhibit
brittle generalization and require excessive training data for practical tasks.
To harness the capabilities of state-of-the-art robot learning models while
embracing their imperfections, we present Sirius, a principled framework for
humans and robots to collaborate through a division of work. In this framework,
partially autonomous robots are tasked with handling a major portion of
decision-making where they work reliably; meanwhile, human operators monitor
the process and intervene in challenging situations. Such a human-robot team
ensures safe deployments in complex tasks. Further, we introduce a new learning
algorithm to improve the policy's performance on the data collected from the
task executions. The core idea is re-weighting training samples with
approximated human trust and optimizing the policies with weighted behavioral
cloning. We evaluate Sirius in simulation and on real hardware, showing that
Sirius consistently outperforms baselines over a collection of contact-rich
manipulation tasks, achieving an 8% boost in simulation and a 27% boost on
real hardware over state-of-the-art methods, with twice-as-fast convergence
and an 85% reduction in memory size. Videos and code are available at
https://ut-austin-rpl.github.io/sirius
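The weighted behavioral cloning idea can be sketched in a toy form (this is not the Sirius code; the linear policy, synthetic data, and trust scores are all illustrative assumptions): each demonstration is weighted by an approximated trust score, and the policy minimizes the weighted imitation loss.

```python
import numpy as np

# Toy weighted behavioral cloning: minimize
#   L = (1/N) * sum_i w_i * || pi(s_i) - a_i ||^2
# where w_i is a per-sample weight (e.g. an approximated human-trust score).
rng = np.random.default_rng(0)
states = rng.standard_normal((100, 4))          # demonstration states
actions = states @ rng.standard_normal((4, 2))  # demonstration actions
weights = rng.uniform(0.1, 1.0, size=100)       # trust scores (assumed)

W = np.zeros((4, 2))                            # linear policy pi(s) = s @ W
lr = 0.05
for _ in range(500):
    err = states @ W - actions                  # residuals, shape (100, 2)
    W -= lr * states.T @ (weights[:, None] * err) / len(states)

loss = np.mean(weights * np.sum((states @ W - actions) ** 2, axis=1))
```

Low-trust samples contribute proportionally less to the gradient, so the policy prioritizes fitting the demonstrations the human deemed reliable.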